EE6132: Panoramic Stitching

Brief Overview

In this problem set, you will implement panoramic stitching. Given two input images, we will "stitch" them together to create a simple panorama. To construct the image panorama, we will use concepts learned in class such as keypoint detection, local invariant descriptors, RANSAC, and perspective warping.

The panoramic stitching algorithm consists of four main steps which we ask you to implement in individual functions:

  1. Detect keypoints and extract local invariant descriptors from two input images.

  2. Match the descriptors between the two images.

  3. Apply RANSAC to estimate a homography matrix between the extracted features.

  4. Apply a perspective transformation using the homography matrix to merge the images into a panorama.

Functions to implement (refer to function comments for more detail):

  1. get_features

  2. match_keypoints

  3. find_homography and transform_ransac

  4. panoramic_stitching

Starting

Run the following code to import the modules you'll need. After you finish the assignment, remember to run all cells and save the notebook to your local machine as an .ipynb file for Canvas submission.

In [230]:
%matplotlib inline
import cv2
import random
import numpy as np
import matplotlib.pyplot as plt
from google.colab.patches import cv2_imshow
import pdb
from math import floor, ceil
In [231]:
%%capture
! wget -O img1.jpg "https://drive.google.com/uc?export=download&id=1omMydL6ADxq_vW5gl_1EFhdzT9kaMhUt"
! wget -O img2.jpg "https://drive.google.com/uc?export=download&id=12lxB1ArAlwGn97XgBgt-SFyjE7udMGvf"

Visualize Input Images

In [232]:
img1 = plt.imread('img1.jpg')
img2 = plt.imread('img2.jpg')
image1=np.copy(img1)
image2=np.copy(img2)
def plot_imgs(img1, img2):
  fig, ax = plt.subplots(1, 2, figsize=(15, 20))
  for a in ax:
    a.set_axis_off()
  ax[0].imshow(img1,cmap='gray')
  ax[1].imshow(img2,cmap='gray')

plot_imgs(image1, image2)

Compute SURF/SIFT/ORB Features and Match Keypoints

  1. Implement get_features(img) to compute SURF features for both of the given images. Implement match_keypoints(desc1, desc2) to compute keypoint correspondences between the two source images using the ratio test. Run the plotting code to visualize the detected features and resulting correspondences. (Hint: You can use existing libraries.)
  2. Write a function find_homography(pts1, pts2) that takes in two N×2 matrices with the x and y coordinates of matching 2D points in the two images and computes the 3×3 homography H that maps pts1 to pts2. You can implement this function using the direct linear transform (DLT). (Hint: You should implement this function on your own.)
  3. Your homography-fitting function from (2) will only work well if there are no mismatched features. To make it more robust, implement a function transform_ransac(pts1, pts2) that fits a homography using RANSAC. (Hint: You should implement this function on your own.)
  4. Write a function panoramic_stitching(img1, img2) that produces a panorama from a pair of overlapping images using your functions from the previous parts. Run the algorithm on the two images provided. (Hint: You should implement this function on your own.)
  5. Extend the algorithm to handle n (> 2) images, and run it on your own photos, or photos you found online. (Hint: You can use existing libraries.)
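The ratio test mentioned in step 1 can be sketched independently of OpenCV. The brute-force nearest-neighbour version below is only illustrative (the function and array names are not part of the assignment):

```python
import numpy as np

def ratio_test_matches(desc1, desc2, ratio=0.75):
    """Keep a match only when the nearest neighbour is clearly closer
    than the second-nearest (Lowe's ratio test)."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)  # distance to every descriptor in desc2
        j1, j2 = np.argsort(dists)[:2]             # nearest and second-nearest indices
        if dists[j1] < ratio * dists[j2]:
            matches.append((i, int(j1)))
    return matches

# The first descriptor has one clear match; the second is ambiguous and is rejected.
desc1 = np.array([[0.0, 0.0], [5.0, 5.0]])
desc2 = np.array([[0.1, 0.0], [5.0, 4.9], [5.0, 5.1]])
print(ratio_test_matches(desc1, desc2))  # [(0, 0)]
```

The FLANN-based matcher used later in the notebook applies exactly this filter, just with an approximate (and much faster) nearest-neighbour search.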
In [233]:
!pip install opencv-python==3.4.2.16
!pip install opencv-contrib-python==3.4.2.16
Requirement already satisfied: opencv-python==3.4.2.16 in /usr/local/lib/python3.6/dist-packages (3.4.2.16)
Requirement already satisfied: numpy>=1.11.3 in /usr/local/lib/python3.6/dist-packages (from opencv-python==3.4.2.16) (1.18.5)
Requirement already satisfied: opencv-contrib-python==3.4.2.16 in /usr/local/lib/python3.6/dist-packages (3.4.2.16)
Requirement already satisfied: numpy>=1.11.3 in /usr/local/lib/python3.6/dist-packages (from opencv-contrib-python==3.4.2.16) (1.18.5)

Implement get_features(img) to compute SURF/SIFT/ORB features for both of the given images. Implement match_keypoints(desc1, desc2) to compute keypoint correspondences between the two source images using the ratio test. Run the plotting code to visualize the detected features and resulting correspondences. (Hint: You can use existing libraries.)

In [234]:
def get_features(img):
  '''
    Compute SURF/SIFT/ORB features using cv2 library functions. Use default parameters when computing the keypoints.
    Input: 
      img: cv2 image
    Returns:
      keypoints: a list of cv2 keypoints
      descriptors: a list of feature descriptors
  '''
  # ===============================================
  surf = cv2.xfeatures2d.SURF_create()
  # Convert to grayscale only if the image has colour channels.
  if len(img.shape) == 3:
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
  else:
    gray = img
  keypoints, descriptors = surf.detectAndCompute(gray, None)

  # ===============================================
  return keypoints, descriptors

def match_keypoints(desc_1, desc_2, ratio=0.75):
  '''
    You may use cv2 library functions.
    Input:
      desc_1, desc_2: list of feature descriptors
    Return:
      matches: list of feature matches
  '''
  # ===============================================
  FLANN_INDEX_KDTREE = 0
  index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
  search_params = dict(checks=50)
  flann = cv2.FlannBasedMatcher(index_params, search_params)
  matches = flann.knnMatch(desc_1, desc_2, k=2)
  good = []
  for m, n in matches:
    if m.distance < ratio * n.distance:   # Lowe's ratio test
      good.append(m)
  # ===============================================
  return matches, good
In [235]:
kp_1, desc_1 = get_features(img1)
kp_2, desc_2 = get_features(img2)

kp_img1 = cv2.drawKeypoints(img1, kp_1, None, color=(0,255,0), flags=0)
kp_img2 = cv2.drawKeypoints(img2, kp_2, None, color=(0,255,0), flags=0)

print('keypoints for image1 and image2...')
plot_imgs(kp_img1, kp_img2)
keypoints for image1 and image2...
In [236]:
[matches,good_matches] = match_keypoints(desc_1, desc_2)
match_plot = cv2.drawMatches(img1, kp_1, img2, kp_2, good_matches[:100], None, flags=2)
print("feature matching between image 1 and 2...")
print('%d feature matches found between image 1 and 2'%len(good_matches))
cv2_imshow(match_plot)
feature matching between image 1 and 2...
390 feature matches found between image 1 and 2

Compute Homography Matrix using RANSAC

Write a function find_homography(pts1, pts2) that takes in two N×2 matrices with the x and y coordinates of matching 2D points in the two images and computes the 3×3 homography H that maps pts1 to pts2. You can implement this function using the direct linear transform (DLT). Report the homography matrix. (Hint: You should implement this function on your own.)
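Before implementing, it may help to see the DLT recipe end-to-end on synthetic data: project four points through a known homography, stack two equation rows per correspondence, and check that the SVD null vector recovers the matrix up to scale. A self-contained sketch (all values are illustrative):

```python
import numpy as np

# A known homography (bottom-right entry 1) and four generic points.
H_true = np.array([[1.2, 0.1, 5.0],
                   [-0.05, 0.9, 3.0],
                   [1e-4, 2e-4, 1.0]])
pts1 = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])

# Project pts1 through H_true to get exact correspondences pts2.
p2h = (H_true @ np.hstack([pts1, np.ones((4, 1))]).T).T
pts2 = p2h[:, :2] / p2h[:, 2:3]

# Stack the 8x9 DLT system: two rows per correspondence.
A = []
for (x, y), (u, v) in zip(pts1, pts2):
    A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
    A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])

# The solution of Ah = 0 is the right-singular vector with the
# smallest singular value, i.e. the last row of V^T.
H = np.linalg.svd(np.array(A))[2][-1].reshape(3, 3)
H /= H[2, 2]                      # fix the arbitrary scale
print(np.allclose(H, H_true, atol=1e-6))
```

With exact (noise-free) correspondences the recovery is essentially perfect; with real, noisy matches the same system is solved in a least-squares sense.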

In [237]:
def find_homography(points1, points2):
  '''
    Implement Direct Linear Transform to find a homography that estimates the transformation mapping from pts_1 to pts_2.
    e.g. If x is in pts_1 and y is in pts_2, then y = H * x
    Input:
      pts_1, pts_2: (4, 2) matrices
    Return:
      H: the resultant homography matrix (3 x 3)
  '''
  # ===============================================
  assert(len(points1) == 4)
  assert(len(points2) == 4)

  # Build the 8x9 matrix A: two rows per point correspondence (DLT).
  A = np.zeros((8,9))
  for i in range(len(points1)):
      A[i*2:i*2 +2] = np.array([[-points1[i][0], -points1[i][1], -1, 0,0,0,
                                  points1[i][0]*points2[i][0], points1[i][1]*points2[i][0], points2[i][0]],
                        [0,0,0, -points1[i][0], -points1[i][1], -1,
                          points1[i][0]*points2[i][1], points1[i][1]*points2[i][1], points2[i][1]]])

  # SVD decomposition of A
  U, s, V_transposed = np.linalg.svd(A, full_matrices=True)
  V = np.transpose(V_transposed)

  # Homogeneous solution of Ah = 0: the rightmost column of V
  # (the right-singular vector for the smallest singular value).
  H = V[:,-1].reshape(3,3)

  # Normalize H so that H[2, 2] = 1.
  H /= H[2,2]
  # ===============================================
  return H
In [238]:
def numInliers(points1, points2, H, threshold):
    '''
    Computes the number of inliers for the given homography.
    - Project the image points from image 1 to image 2
    - A point is an inlier if the distance between the projected point and
        the point in image 2 is smaller than threshold.
    '''
    inlierCount = 0
    ## Hint: Construct a Homogeneous point of type 'Vec3' before applying H.
    points1_homog = np.vstack((points1, np.ones((1, points1.shape[1]))))
    points2_homog = np.vstack((points2, np.ones((1, points2.shape[1]))))
    points2_estimate_homog = H @ points1_homog
    points2_estimate = points2_estimate_homog / points2_estimate_homog[-1, :]
    distance_vector = np.sqrt(np.sum((points2_estimate - points2_homog) ** 2, axis=0))
    inlierCount = np.sum(distance_vector < threshold)
    # Collect the inlier points and their correspondences.
    inlier_corr = []
    inlier_points_1 = []
    inlier_points_2 = []
    for i in range(points1.shape[1]):
        if distance_vector[i] < threshold:
            inlier_points_1.append([points1[0][i], points1[1][i]])
            inlier_points_2.append([points2[0][i], points2[1][i]])
            inlier_corr.append([points1[0][i], points1[1][i], points2[0][i], points2[1][i]])
    return inlierCount, inlier_points_1, inlier_points_2, inlier_corr

Your homography-fitting function from (2) will only work well if there are no mismatched features. To make it more robust, implement a function transform_ransac(pts1, pts2) that fits a homography using RANSAC. Run the plotting code to visualize the point correspondences after applying RANSAC. (Hint: You should implement this function on your own.)
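Whatever the model, the RANSAC loop has the same shape: sample a minimal set, fit, count inliers, keep the best fit. A toy sketch on line fitting (the names, data, and thresholds are illustrative, not part of the assignment):

```python
import numpy as np

rng = np.random.default_rng(0)

# Points on the line y = 2x + 1, with every 5th point made a gross outlier.
x = np.arange(20, dtype=float)
y = 2 * x + 1
y[::5] += 40

best_model, best_count = None, 0
for _ in range(200):
    i, j = rng.choice(len(x), size=2, replace=False)   # minimal sample: 2 points
    a = (y[j] - y[i]) / (x[j] - x[i])                  # fit slope and intercept
    b = y[i] - a * x[i]
    inliers = np.abs(y - (a * x + b)) < 1.0            # residual threshold
    if inliers.sum() > best_count:                     # keep the best model so far
        best_count, best_model = inliers.sum(), (a, b)

print(best_model, best_count)  # recovers (2.0, 1.0) with 16 of 20 points as inliers
```

For a homography, the only changes are the minimal sample size (4 correspondences), the model fit (DLT), and the residual (reprojection distance), which is exactly what transform_ransac below does.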

In [239]:
def transform_ransac(pts_1, pts_2,iterations,threshold):
  '''
    Implement RANSAC to estimate the homography matrix.
    Input:
      pts_1, pts_2: (N, 2) matrices
    Return:
      best_model: homography matrix with the most inliers
  '''
  # ===============================================
  bestInlierCount = 0
  best_model = []
  best_inlier_corr = []

  for i in range(iterations):
      subset1 = []
      subset2 = []

      # Construct the subsets by randomly choosing 4 matches.
      for _ in range(4):
          idx = np.random.randint(0, len(pts_1))
          subset1.append(pts_1[idx])
          subset2.append(pts_2[idx])

      # Compute the homography for this subset.
      H = find_homography(subset1, subset2)

      # Compute the number of inliers under this homography.
      [inlierCount, inlier_points_1, inlier_points_2, inlier_corr] = numInliers(np.array(pts_1).T, np.array(pts_2).T, H, threshold)
      inliers = [inlier_points_1, inlier_points_2]

      # Keep track of the best homography found so far.
      if inlierCount > bestInlierCount:
          bestInlierCount = inlierCount
          best_inlier_corr = inlier_corr
          best_inliers = inliers
          best_model = H
  print('best_model is:', best_model)
  print('bestInlierCount', bestInlierCount)
  # ===============================================
  return best_model, bestInlierCount, best_inlier_corr
In [240]:
src_pts = []
dst_pts = []
MIN_MATCH_COUNT = 10
if len(good_matches)>MIN_MATCH_COUNT:
    src_pts = np.float32([ kp_1[m.queryIdx].pt for m in good_matches ]).reshape(-1,2)
    dst_pts = np.float32([ kp_2[m.trainIdx].pt for m in good_matches ]).reshape(-1,2)
 
[Homography,InlierCount,inlier_corr]=transform_ransac(src_pts,dst_pts,2000,5)
Inlier_matrix=np.matrix(inlier_corr)
best_model is: [[ 1.80662599e+00 -4.93593276e-02 -5.08727599e+02]
 [ 2.79126431e-01  1.54943220e+00 -1.86660513e+02]
 [ 1.02773395e-03  2.69026976e-05  1.00000000e+00]]
bestInlierCount 348
In [241]:
############  Homography between image 1 and 2 #############
print('Homography between image 1 and 2')
print(Homography)
Homography between image 1 and 2
[[ 1.80662599e+00 -4.93593276e-02 -5.08727599e+02]
 [ 2.79126431e-01  1.54943220e+00 -1.86660513e+02]
 [ 1.02773395e-03  2.69026976e-05  1.00000000e+00]]
In [242]:
def drawMatches(img1, kp1, img2, kp2, matches, inliers = None):
    # Create a new output image that concatenates the two images together
    rows1 = img1.shape[0]
    cols1 = img1.shape[1]
    rows2 = img2.shape[0]
    cols2 = img2.shape[1]

    out = np.zeros((max([rows1,rows2]),cols1+cols2,3), dtype='uint8')

    # Place the first image to the left
    out[:rows1,:cols1,:] = np.dstack([img1, img1, img1])

    # Place the next image to the right of it
    out[:rows2,cols1:cols1+cols2,:] = np.dstack([img2, img2, img2])

    # For each pair of points we have between both images
    # draw circles, then connect a line between them
    for mat in matches:

        # Get the matching keypoints for each of the images
        img1_idx = mat.queryIdx
        img2_idx = mat.trainIdx

        # x - columns, y - rows
        (x1,y1) = kp1[img1_idx].pt
        (x2,y2) = kp2[img2_idx].pt

        inlier = False

        if inliers is not None:
            for i in inliers:
                if i.item(0) == x1 and i.item(1) == y1 and i.item(2) == x2 and i.item(3) == y2:
                    inlier = True

        # Draw a small circle at both co-ordinates
        cv2.circle(out, (int(x1),int(y1)), 4, (255, 0, 0), 1)
        cv2.circle(out, (int(x2)+cols1,int(y2)), 4, (255, 0, 0), 1)

        # Draw a line in between the two points, draw inliers if we have them
        if inliers is not None and inlier:
            cv2.line(out, (int(x1),int(y1)), (int(x2)+cols1,int(y2)), (0, 255, 0), 1) #### inlier lines
        elif inliers is not None:
            cv2.line(out, (int(x1),int(y1)), (int(x2)+cols1,int(y2)), (0, 0, 255), 1)

        if inliers is None:
            cv2.line(out, (int(x1),int(y1)), (int(x2)+cols1,int(y2)), (255, 0, 0), 1)  # no inlier info: draw all matches in blue

    return out
In [243]:
im1=cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
im2=cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
matchImg = drawMatches(im1,kp_1,im2,kp_2,good_matches[0:100],Inlier_matrix)
print('point correspondences after applying RANSAC')
print('green lines are correspondences consistent with the homography (inliers); red lines are correspondences rejected by the homography (outliers)')
cv2_imshow(matchImg)
point correspondences after applying RANSAC
green lines are correspondences consistent with the homography (inliers); red lines are correspondences rejected by the homography (outliers)

Homography using the built-in cv2.findHomography for cross-checking

In [244]:
M, mask = cv2.findHomography(src_pts,dst_pts, cv2.RANSAC,10.0)
M
Out[244]:
array([[ 1.78628037e+00, -3.39769420e-02, -5.07377382e+02],
       [ 2.70738521e-01,  1.52846756e+00, -1.80986028e+02],
       [ 9.98516525e-04,  2.38269058e-05,  1.00000000e+00]])

Panoramic Stitching

Write a function panoramic_stitching(img1, img2) that produces a panorama from a pair of overlapping images using your functions from the previous parts. Run the algorithm on the two images provided. Report the stitched panorama image. (Hint: You should implement this function on your own. Use inverse mapping while stitching.)
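Inverse mapping means looping over the pixels of the output canvas and asking where each one comes from in the source image, which avoids the holes a forward warp would leave. A minimal sketch with a pure-translation homography (the sizes and values are illustrative):

```python
import numpy as np

src = np.arange(16, dtype=float).reshape(4, 4)   # tiny source "image"

# Homography that translates by (tx, ty) = (1, 2) in (x, y).
H = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 2.0],
              [0.0, 0.0, 1.0]])
H_inv = np.linalg.inv(H)

out = np.zeros((6, 6))
for row in range(out.shape[0]):              # iterate over *output* pixels
    for col in range(out.shape[1]):
        x, y, w = H_inv @ [col, row, 1]      # map the output pixel back to the source
        xi, yi = int(round(x / w)), int(round(y / w))
        if 0 <= xi < src.shape[1] and 0 <= yi < src.shape[0]:
            out[row, col] = src[yi, xi]      # nearest-neighbour sample for brevity

print(np.array_equal(out[2:6, 1:5], src))    # the shifted copy matches the source
```

The notebook's implementation below follows the same structure, replacing the nearest-neighbour lookup with bilinear interpolation (the inter helper).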

In [245]:
def inter(r, c, image):
    '''
    Bilinear interpolation of image at fractional (row, col) = (r, c).
    Returns 0 for coordinates outside the image.
    '''
    u = r - np.floor(r)          # fractional part along rows
    t = c - np.floor(c)          # fractional part along columns
    v = 0
    [m, n] = image.shape
    if (r < 0) or (c < 0) or (r > m-1) or (c > n-1):
        v = 0
    else:
        a1 = int(np.ceil(r))
        a2 = int(np.floor(r))
        b1 = int(np.ceil(c))
        b2 = int(np.floor(c))
        # Weighted sum of the four surrounding pixels.
        v = ((1-t)*(1-u)*image[a2,b2]) + (t*(1-u)*image[a2,b1]) + (t*u*image[a1,b1]) + ((1-t)*(u)*image[a1,b2])
    return v
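As a quick self-contained check of the same weighting scheme inter uses: on a 2×2 image, sampling the exact centre averages all four pixels.

```python
import numpy as np

def bilinear(img, r, c):
    """Bilinear sample of img at fractional (row, col); same weights as inter."""
    r0, c0 = int(np.floor(r)), int(np.floor(c))
    r1, c1 = int(np.ceil(r)), int(np.ceil(c))
    u, t = r - r0, c - c0        # fractional parts along rows/columns
    return ((1 - t) * (1 - u) * img[r0, c0] + t * (1 - u) * img[r0, c1]
            + t * u * img[r1, c1] + (1 - t) * u * img[r1, c0])

img = np.array([[0.0, 10.0],
                [20.0, 30.0]])
print(bilinear(img, 0.5, 0.5))   # centre of the 2x2 block: mean of all four = 15.0
print(bilinear(img, 0.0, 0.0))   # integer coordinates return the pixel itself: 0.0
```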
In [246]:
def panoramic_stitching_2(img1, img2,Homography):
    '''
      Generate a panoramic image using the obtained homography matrix.
      Input:
        img1, img2: cv2 images
        Homography: 3x3 homography mapping img1 coordinates to img2 coordinates
      Return:
        final_img: cv2 image of panorama
    '''
    # ===============================================
    homography_img_1_to_2 = Homography

    canvas = np.zeros((800, 1556))
    offset_row = 0
    offset_col = 0
    for ii in range(0, canvas.shape[0]):
      for jj in range(0, canvas.shape[1]):
          i = ii - offset_row
          j = jj - offset_col
          # Inverse mapping: project the canvas pixel into image 2's frame.
          tmp = np.matmul(homography_img_1_to_2, [j, i, 1])
          i1 = tmp[0] / tmp[2]
          j1 = tmp[1] / tmp[2]

          v1 = inter(j1, i1, img2)   # sample image 2 at the projected location
          v2 = inter(i, j, img1)     # sample image 1 directly

          # Prefer image 2 in the overlap region; otherwise take whichever image covers the pixel.
          if v1 != 0:
              canvas[ii, jj] = v1
          elif v2 != 0:
              canvas[ii, jj] = v2
    # ===============================================
    return canvas
In [247]:
result_2_panoramic_images = panoramic_stitching_2(im1,im2,Homography)
cv2_imshow(result_2_panoramic_images)

Extension for mosaicing of 3 Images

  1. Extend the algorithm to handle n=3 images, using the given images. A reference panoramic stitched image is also provided to compare your result with. Report all the visualizations mentioned for the n=2 case, along with the two homography matrices with respect to the middle image. (Hint: You should implement this function on your own. Take the middle image as the reference and find the homography transformations with respect to it; the middle image then needs no further transformation, and this choice produces the most aesthetically pleasing panorama.)
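With the middle image as reference, only two homographies are needed: one mapping image 1 into image 2's frame, and the inverse of the image-2-to-image-3 homography for image 3. A small numerical sketch of the inversion step (the matrix entries are illustrative):

```python
import numpy as np

# An illustrative homography from image 1 to image 2.
H_12 = np.array([[1.2, -0.1, -300.0],
                 [0.2, 1.1, -50.0],
                 [4e-4, -3e-5, 1.0]])

# Mapping in the opposite direction is just the matrix inverse,
# renormalised so that the bottom-right entry is 1.
H_21 = np.linalg.inv(H_12)
H_21 /= H_21[2, 2]

# Composing the two directions must give the identity (up to scale).
I = H_12 @ H_21
I /= I[2, 2]
print(np.allclose(I, np.eye(3)))
```

The same inversion is applied below to obtain Homography_21 from Homography_12, so that images 1 and 3 can both be warped into image 2's coordinate frame.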
In [248]:
img_building_1 = plt.imread('Building/image1.jpg')
img_building_2 = plt.imread('Building/image2.jpg')
img_building_3 = plt.imread('Building/image3.jpg')
image_building_1=np.copy(img_building_1)
image_building_2=np.copy(img_building_2)
image_building_3=np.copy(img_building_3)
def plot_imgs(img1, img2,img3):
  fig, ax = plt.subplots(1, 3, figsize=(15, 20))
  # for a in ax:
  #   a.set_axis_off()
  ax[0].imshow(img1,cmap='gray')
  ax[1].imshow(img2,cmap='gray')
  ax[2].imshow(img3,cmap='gray')

plot_imgs(image_building_1, image_building_2, image_building_3)

1. Finding feature points and correspondences

In [249]:
kp_1, desc_1 = get_features(img_building_1)
kp_2, desc_2 = get_features(img_building_2)
kp_3, desc_3 = get_features(img_building_3)

kp_img1 = cv2.drawKeypoints(img_building_1, kp_1, None, color=(0,255,0), flags=0)
kp_img2 = cv2.drawKeypoints(img_building_2, kp_2, None, color=(0,255,0), flags=0)
kp_img3 = cv2.drawKeypoints(img_building_3, kp_3, None, color=(0,255,0), flags=0)

print('keypoints for image1 , image2 and image3...')
plot_imgs(kp_img1, kp_img2,kp_img3)
keypoints for image1 , image2 and image3...
In [250]:
[matches_12,good_matches_12] = match_keypoints(desc_1, desc_2,0.75)
[matches_23,good_matches_23] = match_keypoints(desc_2, desc_3,0.75)
match_plot_1 = cv2.drawMatches(img_building_1, kp_1, img_building_2,kp_2, good_matches_12[:150], None, flags=2)
match_plot_2 = cv2.drawMatches(img_building_2, kp_2, img_building_3,kp_3, good_matches_23[:150], None, flags=2)
print("feature matching between image 1 and 2...")
cv2_imshow(match_plot_1)
print("%d feature correspondences found between image 1 and 2"%len(good_matches_12))
print("feature matching between image 2 and 3...")
cv2_imshow(match_plot_2)
print("%d feature correspondences found between image 2 and 3"%len(good_matches_23))
feature matching between image 1 and 2...
713 feature correspondences found between image 1 and 2
feature matching between image 2 and 3...
1693 feature correspondences found between image 2 and 3

2. Finding homographies of images 1 and 3 w.r.t. image 2

In [251]:
####################### Homography matrix between 1 and 2 #########################
src_pts_12 = []
dst_pts_12 = []
MIN_MATCH_COUNT = 10
if len(good_matches_12)>MIN_MATCH_COUNT:
    src_pts_12 = np.float32([ kp_1[m.queryIdx].pt for m in good_matches_12 ]).reshape(-1,2)
    dst_pts_12 = np.float32([ kp_2[m.trainIdx].pt for m in good_matches_12 ]).reshape(-1,2)
 
[Homography_12,InlierCount_12,inlier_corr_12]=transform_ransac(src_pts_12,dst_pts_12,2000,5)
Inlier_matrix_12=np.matrix(inlier_corr_12)
best_model is: [[ 1.23633143e+00 -8.51758753e-02 -3.09902568e+02]
 [ 1.86777952e-01  1.15620086e+00 -5.51963217e+01]
 [ 3.98285870e-04 -3.28376880e-05  1.00000000e+00]]
bestInlierCount 588
In [252]:
print('Homography between image 2 and 1...')
Homography_21=np.linalg.inv(Homography_12)
print(Homography_21)
Homography between image 2 and 1...
[[ 7.26213803e-01  5.99851858e-02  2.28366484e+02]
 [-1.31329938e-01  8.55411773e-01  6.51609845e+00]
 [-2.93553268e-04  4.19849301e-06  9.09258830e-01]]
In [253]:
####################### Homography matrix between 2 and 3 #########################
src_pts_23 = []
dst_pts_23 = []
MIN_MATCH_COUNT = 10
if len(good_matches_23)>MIN_MATCH_COUNT:
    src_pts_23 = np.float32([ kp_2[m.queryIdx].pt for m in good_matches_23 ]).reshape(-1,2)
    dst_pts_23 = np.float32([ kp_3[m.trainIdx].pt for m in good_matches_23 ]).reshape(-1,2)
 
[Homography_23,InlierCount_23,inlier_corr_23]=transform_ransac(src_pts_23,dst_pts_23,2000,5)
Inlier_matrix_23=np.matrix(inlier_corr_23)
best_model is: [[ 1.08562328e+00 -2.40229408e-02 -1.09850262e+02]
 [ 5.41969622e-02  1.03844032e+00 -1.96049947e+01]
 [ 1.45237861e-04 -2.12883756e-05  1.00000000e+00]]
bestInlierCount 1544
In [254]:
print('Homography between image 2 and 3...')
print(Homography_23)
Homography between image 2 and 3...
[[ 1.08562328e+00 -2.40229408e-02 -1.09850262e+02]
 [ 5.41969622e-02  1.03844032e+00 -1.96049947e+01]
 [ 1.45237861e-04 -2.12883756e-05  1.00000000e+00]]

3. Plotting the point correspondences after applying RANSAC

In [255]:
im_building_1=cv2.cvtColor(img_building_1, cv2.COLOR_BGR2GRAY)
im_building_2=cv2.cvtColor(img_building_2, cv2.COLOR_BGR2GRAY)
matchImg_12 = drawMatches(im_building_1,kp_1,im_building_2,kp_2,good_matches_12[0:100],Inlier_matrix_12)
print('point correspondences after applying RANSAC for image 1 and 2...')

print('green lines are correspondences consistent with the homography (inliers); red lines are correspondences rejected by the homography (outliers)')
cv2_imshow(matchImg_12)
point correspondences after applying RANSAC for image 1 and 2...
green lines are correspondences consistent with the homography (inliers); red lines are correspondences rejected by the homography (outliers)
In [256]:
im_building_2=cv2.cvtColor(img_building_2, cv2.COLOR_BGR2GRAY)
im_building_3=cv2.cvtColor(img_building_3, cv2.COLOR_BGR2GRAY)
matchImg_23 = drawMatches(im_building_2,kp_2,im_building_3,kp_3,good_matches_23[0:100],Inlier_matrix_23)
print('point correspondences after applying RANSAC for image 2 and 3  ...')

print('green lines are correspondences consistent with the homography (inliers); red lines are correspondences rejected by the homography (outliers)')
cv2_imshow(matchImg_23)
point correspondences after applying RANSAC for image 2 and 3...
green lines are correspondences consistent with the homography (inliers); red lines are correspondences rejected by the homography (outliers)

4. Homography Stitching

In [257]:
def panoramic_stitching_3(img1, img2,img3,homography_img_2_to_1,homography_img_2_to_3):
    '''
      Generate a panoramic image using the two homographies with respect
      to the middle image.
      Input:
        img1, img2, img3: cv2 images
      Return:
        final_img: cv2 image of panorama
    '''
    # ===============================================
    canvas = np.zeros((700, 1500))
    offset_row = 80
    offset_col = 500
    for ii in range(0, canvas.shape[0]):
      for jj in range(0, canvas.shape[1]):
          i = ii - offset_row
          j = jj - offset_col
          # Inverse mapping from the canvas (image 2's frame) into images 1 and 3.
          tmp_homo_21 = np.matmul(homography_img_2_to_1, [j, i, 1])
          i1 = tmp_homo_21[0] / tmp_homo_21[2]
          j1 = tmp_homo_21[1] / tmp_homo_21[2]
          tmp_homo_23 = np.matmul(homography_img_2_to_3, [j, i, 1])
          i3 = tmp_homo_23[0] / tmp_homo_23[2]
          j3 = tmp_homo_23[1] / tmp_homo_23[2]

          v1 = inter(j1, i1, img1)   # sample image 1
          v2 = inter(i, j, img2)     # sample image 2 (reference frame)
          v3 = inter(j3, i3, img3)   # sample image 3

          # In overlap regions prefer image 3, then image 2, then image 1.
          if v3 != 0:
              canvas[ii, jj] = v3
          elif v2 != 0:
              canvas[ii, jj] = v2
          elif v1 != 0:
              canvas[ii, jj] = v1
    # ===============================================
    return canvas
In [258]:
result = panoramic_stitching_3(im_building_1,im_building_2,im_building_3,Homography_21,Homography_23)
cv2_imshow(result)